In order to assist the drug discovery/development process, pharmaceutical companies often apply biomedical NER and linking techniques over internal and public corpora. Decades of study in the field of BioNLP have produced a plethora of algorithms, systems, and datasets. However, our experience has been that no single open-source system meets all the requirements of a modern pharmaceutical company. In this work, we describe these requirements according to our experience of the industry, and present Kazu, a highly extensible, scalable open-source framework designed to support BioNLP for the pharmaceutical sector. Kazu is built around a computationally efficient version of the BERN2 NER model (TinyBERN2) and wraps several other BioNLP technologies into one coherent system. The KAZU framework is open-sourced: https://github.com/AstraZeneca/KAZU
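As a rough illustration of the wrap-and-compose pattern the abstract describes (several BioNLP components behind one pipeline interface), here is a minimal sketch. The class and function names are hypothetical stand-ins, not Kazu's actual API; see the repository for the real interfaces.

```python
# Illustrative pipeline-of-steps pattern; names are hypothetical, not Kazu's API.
from typing import Protocol

class Step(Protocol):
    def __call__(self, doc: dict) -> dict: ...

class Pipeline:
    """Runs documents through a configurable sequence of NLP steps."""
    def __init__(self, steps: list[Step]):
        self.steps = steps

    def __call__(self, doc: dict) -> dict:
        for step in self.steps:
            doc = step(doc)           # each step enriches the document in place
        return doc

def ner_step(doc):                    # stand-in for a TinyBERN2-style tagger
    doc["entities"] = [{"text": "EGFR", "type": "gene"}]
    return doc

def linking_step(doc):                # stand-in for an entity linker
    for e in doc["entities"]:
        e["id"] = "ENSG00000146648"   # illustrative identifier only
    return doc

pipeline = Pipeline([ner_step, linking_step])
print(pipeline({"text": "EGFR is overexpressed in NSCLC"}))
```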
Transformer-based models have been widely used and have achieved state-of-the-art performance in various domains such as natural language processing and computer vision. Recent works have shown that Transformers can also be generalized to graph-structured data. However, success has been limited to small-scale graphs due to technical challenges such as quadratic complexity in the number of nodes and non-local aggregation, which often leads to inferior generalization performance compared to conventional graph neural networks. In this paper, to address these issues, we propose the Deformable Graph Transformer (DGT), which performs sparse attention over dynamically sampled key-value pairs. Specifically, our framework first constructs multiple node sequences with various criteria to account for both structural and semantic proximity. Sparse attention is then applied to the node sequences to learn node representations at reduced computational cost. We also design simple and effective positional encodings to capture structural similarity and distance between nodes. Experiments show that our novel graph Transformer consistently outperforms existing Transformer-based models and achieves competitive performance compared to state-of-the-art models on 8 graph benchmark datasets, including large-scale graphs.
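To make the sampling idea concrete, here is a minimal sketch of attention restricted to sampled key-value pairs. The uniform sampler and the omitted query/key/value projections are illustrative assumptions; DGT samples via multiple structure- and semantics-aware node sequences.

```python
# Sparse attention over k sampled keys per node: O(N*k) instead of O(N^2).
import numpy as np

def sparse_attention(x, key_idx, d_k):
    """x: (N, d) node features; key_idx: (N, k) sampled key indices per node."""
    q = x                                    # queries (projections omitted)
    k = x[key_idx]                           # (N, k, d) gathered keys
    v = x[key_idx]                           # (N, k, d) gathered values
    scores = np.einsum("nd,nkd->nk", q, k) / np.sqrt(d_k)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over sampled keys only
    return np.einsum("nk,nkd->nd", attn, v)  # (N, d) updated representations

N, d, k = 1000, 64, 16
x = np.random.randn(N, d).astype(np.float32)
key_idx = np.random.randint(0, N, size=(N, k))  # stand-in for DGT's sampler
print(sparse_attention(x, key_idx, d_k=d).shape)  # (1000, 64)
```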
Graph neural networks (GNNs) have been widely applied across various domains to learn from graph-structured data. They have shown significant improvements over traditional heuristic methods in tasks such as node classification and graph classification. However, since GNNs rely heavily on smoothed node features rather than graph structure, they often underperform simple heuristics in link prediction, where structural information (e.g., overlapping neighborhoods, degrees, and shortest paths) is crucial. To address this limitation, we propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs), which learn useful structural features from the adjacency matrix and estimate overlapping neighborhoods for link prediction. Our Neo-GNNs generalize neighborhood-overlap-based heuristics and handle overlapping multi-hop neighborhoods. Our extensive experiments on the Open Graph Benchmark (OGB) datasets demonstrate that Neo-GNNs consistently achieve state-of-the-art performance in link prediction. Our code is publicly available at https://github.com/seongjunyun/neo_gnns.
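The heuristics Neo-GNNs generalize can be computed directly from the adjacency matrix; a minimal sketch follows. The fixed per-hop decay is an illustrative assumption, where the paper instead learns how to weight and combine multi-hop overlaps.

```python
# Common-neighbor and multi-hop overlap scores from the adjacency matrix.
import numpy as np

def common_neighbors(A):
    """(A @ A)[i, j] = number of shared 1-hop neighbors of i and j."""
    return A @ A

def multi_hop_overlap(A, hops=2, decay=0.5):
    """Decayed overlap of h-hop walk-reachability, as a fixed-weight stand-in."""
    score = np.zeros_like(A, dtype=float)
    reach = np.eye(A.shape[0])
    for h in range(1, hops + 1):
        reach = reach @ A                 # reach[i, j] = # length-h walks i -> j
        score += (decay ** h) * (reach @ reach.T)
    return score

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(common_neighbors(A)[0, 3])  # nodes 0 and 3 share neighbor 1 -> 1.0
print(multi_hop_overlap(A)[0, 3])
```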
Recent developments in dense retrieval rely on quality representations of queries and contexts coming from pre-trained query and context encoders. In this paper, we introduce TouR (test-time optimization of query representations), which further optimizes instance-level query representations guided by signals from test-time retrieval results. We leverage a cross-encoder re-ranker to provide fine-grained pseudo labels over retrieval results and iteratively optimize query representations with the gradient descent method. Our theoretical analysis reveals that TouR can be viewed as a generalization of the classical Rocchio algorithm for pseudo relevance feedback, and we present two variants leveraging pseudo labels as either hard binary or soft continuous labels. We first apply TouR to phrase retrieval with our proposed phrase re-ranker. On passage retrieval, we demonstrate its effectiveness with an off-the-shelf re-ranker. TouR significantly improves end-to-end open-domain QA accuracy as well as passage retrieval performance. Compared to the re-ranker, TouR requires fewer candidates, achieves consistently better performance, and runs up to 4x faster with our efficient implementation.
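A minimal sketch of one hard-label update step follows: the re-ranker picks a pseudo-positive among the current top candidates, and the query vector takes a gradient step. The random corpus and the `rerank` stand-in are illustrative assumptions, not the authors' implementation.

```python
# One TouR-style test-time gradient step on a single query embedding.
import torch

def tour_step(q, corpus, rerank, top_k=10, lr=0.1):
    """q: (d,) query embedding; corpus: (N, d) context embeddings."""
    q = q.clone().requires_grad_(True)
    scores = corpus @ q                          # inner-product retrieval scores
    cand = scores.topk(top_k).indices            # current top-k candidates
    target = rerank(cand).argmax().unsqueeze(0)  # re-ranker's pseudo-positive
    loss = torch.nn.functional.cross_entropy(scores[cand].unsqueeze(0), target)
    loss.backward()                              # gradient w.r.t. the query only
    return (q - lr * q.grad).detach()

d, N = 128, 5000
corpus = torch.randn(N, d)
q = torch.randn(d)
rerank = lambda idx: torch.randn(len(idx))       # stand-in cross-encoder scores
for _ in range(3):                               # iterate for a few steps
    q = tour_step(q, corpus, rerank)
```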
In biomedical natural language processing, named entity recognition (NER) and named entity normalization (NEN) are key tasks that enable the automatic extraction of biomedical entities (e.g., diseases and chemicals) from the ever-growing biomedical literature. In this paper, we present BERN2 (Advanced Biomedical Entity Recognition and Normalization), a tool that improves upon a previous neural-network-based NER tool (Kim et al., 2019) by employing a multi-task NER model and neural-network-based NEN models to achieve much faster and more accurate inference. We hope that our tool can help annotate large-scale biomedical texts for various tasks such as biomedical knowledge graph construction.
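A minimal sketch of the multi-task NER idea follows: one shared encoder with a separate tagging head per entity type, so a single forward pass serves all types. The dimensions, the two-layer encoder, and the type list are placeholder assumptions, not BERN2's actual architecture.

```python
# Shared encoder + per-entity-type BIO heads for multi-task NER.
import torch
import torch.nn as nn

class MultiTaskNER(nn.Module):
    def __init__(self, hidden=768, n_labels=3,
                 entity_types=("gene", "disease", "drug", "species")):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                       batch_first=True), num_layers=2)
        # One BIO classifier per entity type over the shared representation.
        self.heads = nn.ModuleDict(
            {t: nn.Linear(hidden, n_labels) for t in entity_types})

    def forward(self, x):                        # x: (batch, seq, hidden)
        h = self.encoder(x)
        return {t: head(h) for t, head in self.heads.items()}

model = MultiTaskNER()
logits = model(torch.randn(2, 16, 768))          # stand-in for BERT embeddings
print({t: v.shape for t, v in logits.items()})   # per-type BIO logits
```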
Named entity recognition (NER) is the task of extracting named entities of specific types from text. Current NER models often rely on human-annotated datasets, requiring extensive involvement of experts in the target domain and entities. This work introduces an ask-to-generate approach, which automatically generates NER datasets by asking simple natural language questions that reflect the needs of entity types (e.g., "Which disease?") to an open-domain question answering system. Without using any in-domain resources (i.e., training sentences, labels, or in-domain dictionaries), our models trained solely on our generated datasets largely outperform strong weakly supervised models on six benchmarks across four different domains. Surprisingly, on NCBI-disease, our model achieves a 75.5 F1 score and even outperforms the previous best weakly supervised model by 4.1 F1, despite the latter utilizing a rich in-domain dictionary provided by domain experts. Formulating the needs of NER with natural language also allows us to build NER models for fine-grained entity types such as Award, where our model even outperforms fully supervised models. On three few-shot NER benchmarks, our model achieves new state-of-the-art performance.
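To make the recipe concrete, here is a minimal sketch that turns QA results for a type-describing question into BIO-tagged training examples. The `qa_system` interface and its result fields are hypothetical stand-ins for the open-domain QA system the paper queries.

```python
# Convert (sentence, answer span) pairs from a QA system into BIO examples.
def to_bio_example(sentence, answer, etype="DISEASE"):
    tokens, span = sentence.split(), answer.split()
    tags = ["O"] * len(tokens)
    for i in range(len(tokens) - len(span) + 1):
        if tokens[i:i + len(span)] == span:   # locate the answer span
            tags[i] = f"B-{etype}"
            for j in range(i + 1, i + len(span)):
                tags[j] = f"I-{etype}"
            break
    return list(zip(tokens, tags))

def generate_dataset(qa_system, question="Which disease?", top_k=1000):
    # Each result carries the answer phrase and its evidence sentence.
    return [to_bio_example(r["sentence"], r["answer"])
            for r in qa_system(question, top_k=top_k)]

fake_qa = lambda q, top_k: [{"sentence": "Aspirin may reduce heart attack risk",
                             "answer": "heart attack"}]
print(generate_dataset(fake_qa))  # ..., ('heart', 'B-DISEASE'), ('attack', 'I-DISEASE'), ...
```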
This paper is a technical report on our system submitted to the chemical identification task of the BioCreative VII Track 2 challenge. A major feature of this challenge is that the data consist of full-text articles, whereas current datasets usually contain only titles and abstracts. To address the problem effectively, we aim to improve tagging consistency and entity coverage using various methods, such as majority voting within the same articles for named entity recognition (NER), and a hybrid approach that combines a dictionary and a neural model for normalization. In experiments on the NLM-Chem dataset, we show that our methods improve model performance, particularly in terms of recall. Finally, in the official evaluation of the challenge, our system ranked first by substantially outperforming the baseline model and more than 80 submissions from 16 teams.
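A minimal sketch of the majority-voting idea over a full-text article follows: when the same surface form receives conflicting labels in different sentences, every occurrence is re-labelled with the majority decision. This illustrates the consistency mechanism, not the exact submitted system.

```python
# Re-label each surface form with its article-level majority label.
from collections import Counter, defaultdict

def majority_vote(mentions):
    """mentions: list of (surface_form, predicted_label) over one article."""
    votes = defaultdict(Counter)
    for surface, label in mentions:
        votes[surface.lower()][label] += 1
    majority = {s: c.most_common(1)[0][0] for s, c in votes.items()}
    return [(s, majority[s.lower()]) for s, _ in mentions]

preds = [("oxaliplatin", "CHEMICAL"), ("Oxaliplatin", "O"),
         ("oxaliplatin", "CHEMICAL")]
print(majority_vote(preds))  # all three occurrences now labelled CHEMICAL
```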
Current studies in extractive question answering (EQA) model the single-span extraction setting, where a single answer span is the label to predict for a given question-passage pair. This setting is natural for general-domain EQA, since a single span can answer most questions in the general domain. Following general-domain EQA models, current biomedical EQA (BioEQA) models utilize the single-span extraction setting with post-processing steps. In this paper, we investigate the question distributions across the general and biomedical domains and find that biomedical questions are more likely to require list-type answers (multiple answers) than factoid-type answers (a single answer). This calls for models that can provide multiple answers to a question. Based on this preliminary study, we propose a sequence-tagging approach for BioEQA, which is a multi-span extraction setting. Our approach directly tackles questions with a variable number of phrases as answers and can learn to decide the number of answers to a question from the training data. Our experimental results on the BioASQ 7b and 8b list-type questions outperform the best-performing existing models without requiring post-processing steps. Source codes and resources are freely available for download at https://github.com/dmis-lab/seqtagqa
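The core of the multi-span formulation is that any number of answers falls out of one tag sequence; a minimal decoding sketch follows, with a simplified B/I/O scheme assumed for illustration.

```python
# Decode a variable number of answer spans from a BIO tag sequence.
def decode_spans(tokens, tags):
    answers, span = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                        # a new answer span begins
            if span:
                answers.append(" ".join(span))
            span = [tok]
        elif tag == "I" and span:             # continue the current span
            span.append(tok)
        else:                                 # 'O' (or stray 'I') closes a span
            if span:
                answers.append(" ".join(span))
            span = []
    if span:
        answers.append(" ".join(span))
    return answers

tokens = "EGFR KRAS and BRAF are mutated".split()
tags   = ["B", "B", "O", "B", "O", "O"]
print(decode_spans(tokens, tags))  # ['EGFR', 'KRAS', 'BRAF'], a list-type answer
```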
When performing named entity recognition (NER), entity length is variable and depends on the specific domain or dataset. Pre-trained language models (PLMs) are used to solve NER tasks and tend to be biased toward dataset patterns such as length statistics, surface forms, and skewed class distributions. These biases hinder the generalization ability of PLMs, which is necessary to handle the many unseen mentions that arise in real-world situations. We propose a novel debiasing method, RegLER, to improve predictions for entities of varying lengths. To close the gap between evaluation and real-world situations, we evaluate PLMs on partitioned benchmark datasets containing unseen mention sets. Here, RegLER achieves significant improvements on long-named entities, which it can predict by debiasing conjunctions or special characters within entities. Furthermore, severe class imbalance exists in most NER datasets, causing easy-negative examples such as "The" to dominate during training. Our approach alleviates the skewed class distribution by reducing the influence of easy-negative examples. Extensive experiments on the biomedical and general domains demonstrate the generalization capability of our method. To facilitate reproducibility and future work, we release our code at https://github.com/minstar/regler
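A minimal sketch of the down-weighting intuition follows, using a focal-style modulating factor so that confidently predicted 'O' tokens contribute less to the loss. This illustrates the debiasing idea only; it is not RegLER's exact formulation.

```python
# Token-level NER loss that shrinks the contribution of easy negatives.
import torch
import torch.nn.functional as F

def downweighted_loss(logits, labels, o_label=0, gamma=2.0):
    """logits: (n_tokens, n_labels); labels: (n_tokens,)."""
    log_p = F.log_softmax(logits, dim=-1)
    log_p_true = log_p.gather(1, labels.unsqueeze(1)).squeeze(1)
    p_true = log_p_true.exp()
    weight = torch.where(labels == o_label,
                         (1 - p_true) ** gamma,    # shrink easy 'O' tokens
                         torch.ones_like(p_true))  # keep entity tokens intact
    return -(weight * log_p_true).mean()

logits = torch.randn(8, 5)
labels = torch.tensor([0, 0, 1, 2, 0, 0, 0, 3])    # mostly 'O' tokens
print(downweighted_loss(logits, labels))
```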
Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
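BioBERT checkpoints are distributed through the Hugging Face hub; the snippet below loads one for feature extraction. The model ID shown is the dmis-lab release of BioBERT v1.1; verify it matches the version you need.

```python
# Load BioBERT and extract contextual embeddings for a biomedical sentence.
from transformers import AutoModel, AutoTokenizer

name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("EGFR mutations confer gefitinib sensitivity",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) token embeddings
```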